    Benchmarking airborne laser scanning tree segmentation algorithms in broadleaf forests shows high accuracy only for canopy trees

    Individual tree segmentation from airborne laser scanning data is a longstanding and important challenge in forest remote sensing. Tree segmentation algorithms are widely available, but robust intercomparison studies are rare because reliable reference data are difficult to obtain. Here we provide a benchmark data set for temperate and tropical broadleaf forests generated from labelled terrestrial laser scanning data. We compared the performance of four widely used tree segmentation algorithms against this benchmark data set. All algorithms performed reasonably well on canopy trees. The point-cloud-based algorithm AMS3D (Adaptive Mean Shift 3D) had the highest overall accuracy, closely followed by the 2D raster-based region-growing algorithm Dalponte2016+. However, all algorithms failed to accurately segment the understory trees, a result that was consistent across both forest types. This study emphasises the need to assess tree segmentation algorithms directly against benchmark data, rather than by comparing forest indices such as biomass or the number and size distribution of trees. We provide the first openly available benchmark data set for tropical forests and hope future studies will extend this work to other regions.
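A minimal sketch of the kind of benchmarking the abstract describes, assuming (not taken from the paper) that predicted tree tops are greedily matched to reference tree tops within a distance tolerance, from which precision, recall, and F1 follow. The function name and the 5 m tolerance are illustrative assumptions.

```python
def match_trees(predicted, reference, max_dist=5.0):
    """Greedily match predicted (x, y) tree tops to reference tree tops.

    Illustrative assumption: a prediction is a true positive when it lies
    within max_dist of a not-yet-matched reference tree. Greedy matching
    is order-dependent; real benchmarks may use optimal assignment.
    """
    unmatched = list(reference)
    tp = 0
    for px, py in predicted:
        best, best_d = None, max_dist
        for ref in unmatched:
            d = ((px - ref[0]) ** 2 + (py - ref[1]) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = ref, d
        if best is not None:
            unmatched.remove(best)
            tp += 1
    fp = len(predicted) - tp  # spurious segments (commission errors)
    fn = len(reference) - tp  # missed trees (omission errors)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return tp, fp, fn, f1
```

Scoring per canopy stratum (canopy vs. understory trees separately, as in the study) only requires partitioning the reference list before matching.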

    Using deep convolutional neural networks to forecast spatial patterns of Amazonian deforestation

    Abstract: Tropical forests are subject to diverse deforestation pressures while their conservation is essential to achieve global climate goals. Predicting the location of deforestation is challenging due to the complexity of the natural and human systems involved but accurate and timely forecasts could enable effective planning and on‐the‐ground enforcement practices to curb deforestation rates. New computer vision technologies based on deep learning can be applied to the increasing volume of Earth observation data to generate novel insights and make predictions with unprecedented accuracy. Here, we demonstrate the ability of deep convolutional neural networks (CNNs) to learn spatiotemporal patterns of deforestation from a limited set of freely available global data layers, including multispectral satellite imagery, the Hansen maps of annual forest change (2001–2020) and the ALOS PALSAR digital surface model, to forecast deforestation (2021). We designed four model architectures, based on 2D CNNs, 3D CNNs, and Convolutional Long Short‐Term Memory (ConvLSTM) Recurrent Neural Networks (RNNs), to produce spatial maps that indicate the risk to each forested pixel (~30 m) in the landscape of becoming deforested within the next year. They were trained and tested on data from two ~80,000 km2 tropical forest regions in the Southern Peruvian Amazon. The networks could predict the location of future forest loss to a high degree of accuracy (F1 = 0.58–0.71). Our best performing model (3D CNN) had the highest pixel‐wise accuracy (F1 = 0.71) when validated on 2020 forest loss (2014–2019 training). Visual interpretation of the mapped forecasts indicated that the network could automatically discern the drivers of forest loss from the input data. For example, pixels around new access routes (e.g. roads) were assigned high risk, whereas this was not the case for recent, concentrated natural loss events (e.g. remote landslides). 
Convolutional neural networks can harness limited time‐series data to predict near‐future deforestation patterns, an important step towards using the growing volume of satellite remote sensing data to curb global deforestation. The modelling framework can be readily applied to any tropical forest location and used by governments and conservation organisations to prevent deforestation and plan protected areas.
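A short sketch, under assumptions of my own (this is not the authors' code), of how a per-pixel risk map like the one the networks produce would be thresholded into a binary forecast and scored against observed forest loss with the pixel-wise F1 reported in the abstract:

```python
def pixelwise_f1(risk, observed, threshold=0.5):
    """Score a deforestation risk map against observed loss.

    risk: 2D list of per-pixel scores in [0, 1]; observed: 2D list of 0/1
    loss labels. The 0.5 threshold is an illustrative assumption.
    """
    tp = fp = fn = 0
    for risk_row, obs_row in zip(risk, observed):
        for r, o in zip(risk_row, obs_row):
            pred = 1 if r >= threshold else 0
            if pred and o:
                tp += 1          # correctly forecast loss
            elif pred and not o:
                fp += 1          # false alarm
            elif o:
                fn += 1          # missed loss
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

Because deforested pixels are rare relative to standing forest, F1 over the positive class is a more informative score here than overall accuracy.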

    Accurate delineation of individual tree crowns in tropical forests from aerial RGB imagery using Mask R‐CNN

    Tropical forests are a major component of the global carbon cycle and home to two-thirds of terrestrial species. Upper-canopy trees store the majority of forest carbon and can be vulnerable to drought events and storms. Monitoring their growth and mortality is essential to understanding forest resilience to climate change, but large trees are underrepresented in traditional field surveys, so estimates of forest carbon storage are poorly constrained. Aerial photographs provide spectral and textural information to discriminate between tree crowns in diverse, complex tropical canopies, potentially opening the door to landscape-scale monitoring of large trees. Here we describe a new deep convolutional neural network method, Detectree2, which builds on the Mask R-CNN computer vision framework to recognize the irregular edges of individual tree crowns from airborne RGB imagery. We trained and evaluated this model with 3797 manually delineated tree crowns at three sites in Malaysian Borneo and one site in French Guiana. As an example application, we combined the delineations with repeat lidar surveys (taken between 3 and 6 years apart) of the four sites to estimate the growth and mortality of upper-canopy trees. Detectree2 delineated 65 000 upper-canopy trees across 14 km² of aerial images. The skill of the automatic method in delineating unseen test trees was good (F1 score = 0.64) and for the tallest category of trees was excellent (F1 score = 0.74). As predicted from previous field studies, we found that growth rate declined with tree height and tall trees had higher mortality rates than intermediate-size trees. Our approach demonstrates that deep learning methods can automatically segment trees in widely accessible RGB imagery. This tool (provided as an open-source Python package) has many potential applications in forest ecology and conservation, from estimating carbon stocks to monitoring forest phenology and restoration.
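Crown-delineation skill in Mask R-CNN-style evaluation is typically judged by intersection over union (IoU) between predicted and manually delineated crowns. The sketch below is an illustrative assumption, not the Detectree2 API: axis-aligned bounding boxes (xmin, ymin, xmax, ymax) stand in for crown polygons to keep the geometry simple.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes.

    Boxes are (xmin, ymin, xmax, ymax). Returns 0.0 for disjoint boxes.
    """
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    # Overlap extents; clamped to zero when the boxes do not intersect.
    ix = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    iy = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = ix * iy
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union else 0.0
```

With an IoU threshold (0.5 is a common convention), matched crowns count as true positives, and the F1 scores quoted in the abstract follow as in standard detection benchmarks.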